End-to-end text-to-speech (TTS) systems have been developed for European languages like English and Spanish with state-of-the-art speech quality, prosody, and naturalness. However, the development of end-to-end TTS for Indian languages lags behind in quality. The challenges involved in such a task are: 1) scarcity of quality training data; 2) low efficiency during training and inference; 3) slow convergence in the case of large vocabulary sizes. In this work, we investigate fine-tuning an English-pretrained Tacotron2 model with limited Sanskrit data to synthesize natural-sounding Sanskrit speech in a low-resource setting. Our experiments show encouraging results, achieving an overall MOS of 3.38 from 37 evaluators with good knowledge of spoken Sanskrit. This is a promising result considering that the speech data used amounts to only 2.5 hours.
Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature make it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach such as gains on long-tail object queries, and the ability to perform zero-shot and few-shot NLQ.
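As a rough illustration of the narrations-as-queries idea, the sketch below converts timestamped narrations into NLQ-style (query, temporal window) training pairs. The function name, fixed window half-width, and input format are hypothetical simplifications for illustration, not the paper's actual pipeline.

```python
def narrations_to_queries(narrations, half_width=2.0):
    # Turn each timestamped narration (t, text) into an NLQ-style training
    # sample: the narration text acts as the query, and a short temporal
    # window centered on t acts as the ground-truth localization.
    samples = []
    for t, text in narrations:
        samples.append({"query": text,
                        "window": (max(0.0, t - half_width), t + half_width)})
    return samples

pairs = narrations_to_queries([(1.0, "C opens the fridge"),
                               (10.5, "C picks up a knife")])
```

In this toy form each narration yields one localized training sample, which is the mechanism that lets abundant narration data supervise a query localization model.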
Entity matching in Customer 360 is the task of determining if multiple records represent the same real world entity. Entities are typically people, organizations, locations, and events represented as attributed nodes in a graph, though they can also be represented as records in relational data. While probabilistic matching engines and artificial neural network models exist for this task, explaining entity matching has received less attention. In this demo, we present our Explainable Entity Matching (xEM) system and discuss the different AI/ML considerations that went into its implementation.
Collecting sufficient labeled data for spoken language understanding (SLU) is expensive and time-consuming. Recent studies achieved promising results by using pre-trained models in low-resource scenarios. Inspired by this, we aim to ask: which (if any) pre-training strategies can improve performance across SLU benchmarks? To answer this question, we employ four types of pre-trained models and their combinations for SLU. We leverage self-supervised speech and language models (LM) pre-trained on large quantities of unpaired data to extract strong speech and text representations. We also explore using supervised models pre-trained on larger external automatic speech recognition (ASR) or SLU corpora. We conduct extensive experiments on the SLU Evaluation (SLUE) benchmark and observe self-supervised pre-trained models to be more powerful, with pre-trained LM and speech models being most beneficial for the Sentiment Analysis and Named Entity Recognition tasks, respectively.
Through their transfer learning abilities, highly-parameterized large pre-trained language models have dominated the NLP landscape for a multitude of downstream language tasks. Though linguistically proficient, the inability of these models to incorporate the learning of non-linguistic entities (numerals and arithmetic reasoning) limits their usage for tasks that require numeric comprehension or strict mathematical reasoning. However, as we illustrate in this paper, building a general-purpose language model that also happens to be proficient in mathematical reasoning is not as straightforward as training it on a numeric dataset. In this work, we develop a novel framework that enables language models to be mathematically proficient while retaining their linguistic prowess. Specifically, we offer information-theoretic interventions to overcome the catastrophic forgetting of linguistic skills that occurs while injecting non-linguistic skills into language models.
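The paper's interventions are information-theoretic, but as a loose illustration of one standard guard against catastrophic forgetting, the sketch below shows an L2-SP-style penalty that anchors fine-tuned weights to their pretrained values. The function and its `lam` coefficient are illustrative assumptions, not the authors' method.

```python
def l2sp_penalty(weights, anchor, lam=0.01):
    # Penalize drift away from the pretrained anchor weights; added to the
    # task loss, this discourages overwriting previously learned (linguistic)
    # knowledge while training on new (numeric) data.
    return lam * sum((w - a) ** 2 for w, a in zip(weights, anchor))
```

In practice such a penalty is added to the fine-tuning loss over all parameter tensors; the paper's information-theoretic approach differs in how it decides which knowledge to preserve.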
We present the Habitat-Matterport 3D Semantics (HM3DSEM) dataset. HM3DSEM is the largest dataset of 3D real-world spaces with densely annotated semantics that is currently available to the academic community. It consists of 142,646 object instance annotations across 216 3D spaces and 3,100 rooms within those spaces. The scale, quality, and diversity of object annotations far exceed those of prior datasets. A key difference setting HM3DSEM apart from other datasets is the use of texture information to annotate pixel-accurate object boundaries. We demonstrate the effectiveness of the HM3DSEM dataset for the Object Goal Navigation task using different methods. Policies trained using HM3DSEM outperform those trained on prior datasets. The introduction of HM3DSEM in the Habitat ObjectNav Challenge led to an increase in participation from 400 submissions in 2021 to 1022 submissions in 2022.
ML-as-a-service continues to grow, and so does the need for very strong privacy guarantees. Secure inference has emerged as a potential solution, in which cryptographic primitives allow inference without revealing the user's input to the model owner or the model's weights to the user. For example, the model owner could be a diagnostics company that has trained a state-of-the-art DenseNet-121 model for interpreting chest X-rays, and the user could be a patient at a hospital. While secure inference is in principle feasible for this setting, no existing techniques make it practical at scale. The CryptFlow2 framework provides a potential solution with its ability to automatically and correctly translate clear-text inference to secure inference. However, the secure inference generated from CryptFlow2 is impractically expensive: interpreting a single X-ray on DenseNet-121 requires almost 3TB of communication. In this paper, we address the significant challenge of inefficient secure inference with three contributions. First, we show that the primary bottleneck in secure inference is the large linear layers, which can be optimized through the choice of network backbone and the use of operators developed for efficient clear-text inference. This finding and emphasis deviates from many recent works, which focus on optimizing the non-linear activation layers when performing secure inference of smaller networks. Second, based on an analysis of the bottleneck convolutional layers, we design a more efficient drop-in replacement X-operator. Third, we show that the fast Winograd convolution algorithm further improves the efficiency of secure inference. In combination, these three optimizations prove to be highly effective for the problem of X-ray interpretation using models trained on the CheXpert dataset.
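To illustrate the third contribution, here is a minimal 1D Winograd F(2,3) kernel: it produces two outputs of a 3-tap filter using 4 multiplications instead of the naive 6, the kind of multiplication saving that reduces cost in both clear-text and secure settings. This is a textbook sketch, not the paper's implementation.

```python
def winograd_f23(d, g):
    # Winograd F(2,3): two outputs of a 3-tap filter g over a 4-element
    # input tile d, with 4 multiplications instead of 6.
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    # Equivalent to [d0*g0 + d1*g1 + d2*g2, d1*g0 + d2*g1 + d3*g2].
    return [m1 + m2 + m3, m2 - m3 - m4]
```

The filter-side transforms (the sums of `g` terms) can be precomputed once per filter, so only the data-side additions and the 4 products recur per tile.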
Over the past few years, the measurement of cardiac pulse from facial videos has become an interesting pursuit of research. This is mainly due to the growing importance of obtaining a person's heart rate in a non-invasive manner, which can be very useful for applications in the gaming and medical industries. Another instrumental development in recent years has been the advent of deep learning and the use of deep neural networks to enhance task performance. In this work, we propose to use an efficient convolutional network to accurately measure the user's heart rate from low-resolution facial videos. Furthermore, to ensure that we can obtain the heart rate in real time, we compress the deep learning model by pruning it, thereby reducing its memory footprint. We benchmark the performance of our approach on the MAHNOB dataset and compare it against multiple methods.
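As a toy illustration of the compression step, the sketch below performs unstructured magnitude pruning on a flat weight list. Real pruning operates on network tensors and is typically followed by fine-tuning; the function here is purely illustrative and not the paper's procedure.

```python
def magnitude_prune(weights, sparsity):
    # Zero out the `sparsity` fraction of weights with the smallest
    # magnitude, the simplest form of unstructured pruning.
    n_prune = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:n_prune])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

pruned = magnitude_prune([0.5, -0.1, 0.9, 0.05], 0.5)
```

Zeroed weights can then be stored in a sparse format, which is what shrinks the model's memory footprint for real-time use.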
Estimation of heart rate from facial videos has many applications in the medical and fitness industries. Moreover, it has also become useful in the field of gaming. Several approaches have been proposed to seamlessly obtain heart rate from facial videos, but these approaches have problems in dealing with motion and illumination artifacts. In this work, we propose a reliable HR estimation framework using the spectral reflectance of the user, which makes it robust to motion and illumination disturbances. We employ deep-learning-based frameworks such as Faster R-CNN to perform face detection, as opposed to the Viola-Jones algorithm used by previous approaches. We evaluate our method on the MAHNOB-HCI dataset and find that the proposed method is able to outperform previous approaches.
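Independent of the detection front-end, the final stage of such pipelines typically recovers the pulse as the dominant frequency of a reflectance trace. Below is a minimal sketch assuming a plain DFT peak search over the physiological band; the paper's actual estimator may differ, and the band limits are common conventions rather than its parameters.

```python
import math

def estimate_bpm(signal, fps, lo=0.7, hi=4.0):
    # Find the dominant frequency of a mean-reflectance trace within the
    # 0.7-4.0 Hz band (42-240 BPM) via a naive DFT; real pipelines would
    # detrend and bandpass-filter the trace first.
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if not (lo <= f <= hi):
            continue
        re = sum((signal[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((signal[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = f, power
    return best_f * 60.0  # Hz -> beats per minute

# Synthetic 1.2 Hz (72 BPM) pulse sampled at 30 fps for 10 s.
trace = [math.sin(2 * math.pi * 1.2 * t / 30) for t in range(300)]
```

Restricting the search to the physiological band is what gives such estimators some robustness to low-frequency illumination drift.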
Spatial optimization problems (SOPs) are characterized by spatial relationships governing the decision variables, objectives, and/or constraint functions. In this article, we focus on a specific type of SOP called spatial partitioning, which is combinatorial in nature due to the presence of discrete spatial units. Exact optimization methods do not scale with the size of the problem, especially within feasible time limits. This motivates us to develop population-based metaheuristics to solve such SOPs. However, the search operators employed by these population-based methods are mostly designed for real-parameter continuous optimization problems. To adapt these methods to SOPs, we apply domain knowledge to design spatially aware search operators that efficiently search the discrete search space while preserving spatial constraints. To this end, we propose a simple yet effective algorithm called SPATIAL (swarm-based spatial memetic algorithm) and test it on the school (re)districting problem. Detailed experimental investigations are performed on real-world datasets to assess the performance of SPATIAL, and ablation studies are conducted to understand the role of its individual components. Additionally, we discuss how SPATIAL can be helpful in real-life planning processes and its applicability to different scenarios, and motivate future research directions.
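As a toy illustration of a spatially aware local-search step, the sketch below hill-climbs the cut point of a one-dimensional two-district partition to balance population, trivially preserving contiguity by construction. SPATIAL itself operates on general spatial units within a swarm-based framework, so this is only a stand-in for its local-search component.

```python
def balance_split(pops):
    # Hill-climb the cut point of a 1D two-district partition to minimize
    # population imbalance; each district stays a contiguous run of units.
    def imbalance(cut):
        return abs(sum(pops[:cut]) - sum(pops[cut:]))
    cut = len(pops) // 2
    while True:
        moves = [c for c in (cut - 1, cut + 1) if 0 < c < len(pops)]
        best = min([cut] + moves, key=imbalance)
        if best == cut:  # no neighboring cut strictly improves -> stop
            return cut, imbalance(cut)
        cut = best
```

The key design idea mirrored here is that moves are restricted to spatially legal reassignments (shifting a boundary unit), so every intermediate solution remains a valid partition.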